Refined isogeometric analysis in 1D

In this chapter we look at the matrices generated from the B-spline basis functions described above, and we estimate the cost of factorizing the resulting systems of equations. In this way, we can identify the basis functions that give faster computations in the isogeometric finite element method. The matrices we obtain here are called mass matrices.
Suppose we want to solve a one-dimensional projection problem
\( u(x) \approx BITMAP(x) \).
Details of the computations with two-dimensional projections are described in chapter one. Here, for simplicity, let us assume we solve the one-dimensional projection problem. The bitmap (which we approximated in chapter one) is now a one-dimensional bitmap.
This one-dimensional bitmap is approximated by a linear combination of B-spline basis functions
\( u =\sum_{i=1,...,N_x} u_{i} B^x_{i;p }(x) \).
Let us assume that we have 16 finite elements (16 intervals). Of course, this does not mean that \( N_x=16 \), because \( N_x \) is the number of basis functions. How many basis functions there are depends on what our knot vector looks like; in other words, it depends on the span of the basis functions and on whether and where we repeat the knots between the elements.
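To make this dependence concrete, here is a minimal sketch (in Python; an illustration added here, not part of the original derivation) that counts the basis functions spanned by an open knot vector using the standard relation: the number of functions equals the number of knots minus \( p \) minus 1. Both knot vectors used below reappear later in this chapter.

```python
def n_basis(knots, p=3):
    # An open knot vector with len(knots) knots spans
    # len(knots) - p - 1 B-spline basis functions of degree p.
    return len(knots) - p - 1

# 16 elements, no repeated interior knots: 16 + p = 19 functions
plain = [0]*4 + list(range(1, 16)) + [16]*4
# 16 elements, every interior knot with multiplicity 3: 16*3 + 1 = 49 functions
repeated = [0]*4 + [k for k in range(1, 16) for _ in range(3)] + [16]*4

print(n_basis(plain), n_basis(repeated))   # 19 49
```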
In order to generate the system of equations of our projection problem, we multiply our equation by the individual B-splines, which are used to average our equation
\( u(x)=BITMAP(x) \)
according to the distributions described by the B-splines, and we integrate to compute the averages \( \int{\color{red}{u(x)}}B^x_{1;p}(x)dx=\int BITMAP(x)B^x_{1;p}(x)dx \\ \int{\color{red}{u(x)}}B^x_{2;p}(x)dx=\int BITMAP(x)B^x_{2;p}(x)dx \\ \vdots \\ \int{\color{red}{u(x)}}B^x_{N_x-1;p}(x)dx=\int BITMAP(x)B^x_{N_x-1;p}(x)dx \\\int{\color{red}{u(x)}}B^x_{N_x;p}(x)dx=\int BITMAP(x)B^x_{N_x;p}(x)dx \)
We substitute our approximation \( {\color{red}{u(x)}}=\sum_{i=1,...,N_x} u_{i} B^x_{i;p }(x) \) and we get the system of equations: \( \int{\color{red}{\sum_{i=1,...,N_x} u_{i} B^x_{i;p}(x)} }B^x_{1;p}(x)dx=\int BITMAP(x)B^x_{1;p}(x) dx \\ \int{\color{red}{\sum_{i=1,...,N_x} u_{i} B^x_{i;p}(x)} }B^x_{2;p}(x)dx=\int BITMAP(x)B^x_{2;p}(x) dx \\ \vdots \\ \int{\color{red}{\sum_{i=1,...,N_x} u_{i} B^x_{i;p}(x)} }B^x_{N_x-1;p}(x)dx=\int BITMAP(x)B^x_{N_x-1;p}(x) dx \\ \int{\color{red}{\sum_{i=1,...,N_x} u_{i} B^x_{i;p}(x)} }B^x_{N_x;p}(x)dx=\int BITMAP(x)B^x_{N_x;p}(x) dx \)
We take the sum out of the integral
\( \sum_{i=1,...,N_x}\int{\color{red}{ u_{i} B^x_{i;p}(x)} }B^x_{1;p}(x)dx=\int BITMAP(x)B^x_{1;p}(x) dx \\ \sum_{i=1,...,N_x} \int{\color{red}{u_{i} B^x_{i;p}(x)} }B^x_{2;p}(x)dx=\int BITMAP(x)B^x_{2;p}(x) dx \\ \vdots \\ \sum_{i=1,...,N_x} \int{\color{red}{u_{i} B^x_{i;p}(x)} }B^x_{N_x-1;p}(x)dx=\int BITMAP(x)B^x_{N_x-1;p}(x) dx \\ \sum_{i=1,...,N_x} \int{\color{red}{ u_{i} B^x_{i;p}(x) } }B^x_{N_x;p}(x)dx=\int BITMAP(x)B^x_{N_x;p}(x) dx \)
We can write our system of equations as follows \( \begin{bmatrix}\int B^x_{1;p}(x)B^x_{1;p}(x)dx & \int B^x_{1;p}(x)B^x_{2;p}(x)dx & \cdots & \int B^x_{1;p}(x)B^x_{N_x;p}(x)dx \\ \int B^x_{2;p}(x)B^x_{1;p}(x)dx & \int B^x_{2;p}(x)B^x_{2;p}(x)dx & \cdots & \int B^x_{2;p}(x)B^x_{N_x;p}(x)dx \\ \vdots & \vdots & \vdots & \vdots \\ \int B^x_{N_x-1;p}(x)B^x_{1;p}(x)dx & \int B^x_{N_x-1;p}(x)B^x_{2;p}(x)dx & \cdots & \int B^x_{N_x-1;p}(x)B^x_{N_x;p}(x)dx \\ \int B^x_{N_x;p}(x)B^x_{1;p}(x)dx & \int B^x_{N_x;p}(x)B^x_{2;p}(x)dx & \cdots & \int B^x_{N_x;p}(x)B^x_{N_x;p}(x)dx \\ \end{bmatrix} \begin{bmatrix} u_{1} \\ u_{2} \\ \vdots \\ u_{N_x-1}\\ u_{N_x}\\ \end{bmatrix} = \) \( \begin{bmatrix} \int BITMAP(x)B^x_{1;p}(x)dx \\ \int BITMAP(x)B^x_{2;p}(x)dx \\ \vdots \\ \int BITMAP(x)B^x_{N_x-1;p}(x)dx \\ \int BITMAP(x)B^x_{N_x;p}(x)dx \\ \end{bmatrix} \)
Please note that the system of equations is sparse because the integrals of the products of one-dimensional B-spline basis functions are non-zero only when basis functions related to rows and columns of the matrix overlap.
Why is it like that? The integral is approximated by a sum of samples of the function which we integrate. Each sample is the product of two basis functions,
the \( i \)-th and the \( j \)-th: \( B^x_{i;p}(x)B^x_{j;p}(x) \). If, at a given sampling point, one of the functions, say the \( i \)-th, is equal to zero, then the sample is \( 0*B^x_{j;p}(x)=0 \).
Likewise, if the second function, the \( j \)-th, is equal to zero at the sampling point, then the sample is \( B^x_{i;p}(x)*0=0 \).
Let us assume now that we span the knot vector [0 0 0 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 16 16 16] over our sixteen elements. We obtain the third-degree B-spline functions used in the isogeometric finite element method. They are depicted in Fig. 1. This figure also illustrates the fact that consecutive rows and columns of the matrix correspond to the consecutive B-spline functions defined by the knot vector. Note that each one-dimensional B-spline function of degree \( p \) overlaps \( 2p+1 \) one-dimensional B-spline functions (including itself). For example, a cubic B-spline ( \( p=3 \) ) overlaps \( 2*3+1=7 \) B-splines. Non-zero values in the matrix appear in those rows and columns for which the corresponding basis functions "overlap" (the intersection of their supports is non-empty).
Our system of equations is therefore sparse, and the matrix contains seven diagonals. For our case of sixteen finite elements, we have
\( N=16+3=19 \) cubic B-spline basis functions (in total we have \( N_e+p \) of them, where \( N_e \) is the number of elements). Our matrix therefore has dimensions \( 19 \times 19 \) and contains 7 dense diagonals. This is illustrated in Fig. 2.
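As a numerical cross-check of the matrix size and of the seven-diagonal structure, the mass matrix can be assembled directly. The sketch below is an illustrative Python script, not part of the original text; it assumes SciPy's BSpline class and uses a 4-point Gauss-Legendre rule per element, which integrates the degree-6 products of cubic B-splines exactly. It reports a \( 19 \times 19 \) matrix with 121 nonzero entries, i.e., 19 + 2*(18+17+16), exactly the seven diagonals described above.

```python
import numpy as np
from scipy.interpolate import BSpline

p = 3
# open knot vector [0 0 0 0 1 2 ... 15 16 16 16 16] for 16 elements
knots = np.array([0.0] * p + list(range(17)) + [16.0] * p)
N = len(knots) - p - 1            # 19 basis functions

# 4-point Gauss-Legendre rule per element, exact for the degree-6 integrand
gx, gw = np.polynomial.legendre.leggauss(p + 1)

def all_basis(x):
    # values of all N basis functions at points x (row i = i-th function)
    return np.array([BSpline(knots, np.eye(N)[i], p)(x) for i in range(N)])

M = np.zeros((N, N))
for e in range(16):               # integrate element by element over [e, e+1]
    x = e + 0.5 * (gx + 1.0)      # map [-1, 1] -> [e, e+1]
    B = all_basis(x)
    M += (B * (0.5 * gw)) @ B.T   # sum_q w_q B_i(x_q) B_j(x_q)

print(M.shape)                    # (19, 19)
print(np.count_nonzero(M))        # 121 = 19 + 2*(18 + 17 + 16): seven diagonals
```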
Let us now estimate the cost of factorizing the matrix for the cubic B-spline basis used in the isogeometric finite element method, shown in Fig. 2.
We run a Gaussian elimination algorithm that skips all zeros in the matrix; it is described in chapter four.
This cost is equal to the cost of factorizing a single row of a submatrix of size \( 4\times 4 \). It consists of scaling the first row of four entries, which costs 4 operations, and subtracting this row from the three rows below, which costs 3*3*2 operations (the cost of subtracting the first row from the others, along with multiplication by the value from the diagonal). We repeat this operation for the 16 blocks of the matrix (corresponding to the 16 elements), and for the last block we complete the factorization, which costs 2*2*2 (the cost of subtracting the second row from the others, along with multiplication by the diagonal value) + 2*1 (the cost of subtracting the third row from the fourth, along with multiplication by the diagonal value) + 1 (scaling the last row). The total cost amounts to 4 * 16 + 3 * 3 * 2 * 16 + 2 * 2 * 2 + 2 * 1 + 1 = 64 + 18 * 16 + 8 + 2 + 1 = 363.
Now consider the case where we repeat knots between the individual finite elements. In general, if we want to obtain basis functions equivalent to the Lagrange polynomials used in the classical finite element method, we need to repeat each interior knot \( p-1 \) additional times, where \( p \) is the order of the B-spline basis. For cubic B-splines, we need to repeat each interior knot two times: [0 0 0 0 1 1 1 2 2 2 3 3 3 4 4 4 5 5 5 6 6 6 7 7 7 8 8 8 9 9 9 10 10 10 11 11 11 12 12 12 13 13 13 14 14 14 15 15 15 16 16 16 16].
What does the matrix of our system of equations look like now? Each cubic B-spline basis function is defined by five consecutive knots from the knot vector (and thus spans at most four elements). The consecutive five-knot sequences are now [0 0 0 0 1], then [0 0 0 1 1], then [0 0 1 1 1], then [0 1 1 1 2], then [1 1 1 2 2], and so on, up to [15 15 15 16 16], then [15 15 16 16 16], and finally [15 16 16 16 16]. How many such sequences do we have in our knot vector? There are 16 * 3 + 1 = 49. Our matrix therefore has dimension \( 49 \times 49 \), and it is shown in Fig. 3. The difference between the matrix just obtained for the basis equivalent to Lagrange polynomials and the matrix obtained earlier for the B-spline basis is as follows. The matrix for the B-spline basis is smaller, but it has denser diagonals. The matrix obtained for the basis equivalent to the Lagrange basis is larger but sparser on the diagonals. Now let us estimate the matrix factorization cost for the basis equivalent to the Lagrange basis used in the standard finite element method, shown in Fig. 3. This cost is equal to the cost of factorizing a single sub-matrix of size \( 4 \times 4 \) by the full Gaussian elimination algorithm, i.e., 4 + 3 + 2 (the cost of scaling the first, second, and third rows from the diagonal to the end of the row) plus 3 * 3 * 2 (the cost of subtracting the first row from the remaining rows, along with multiplication by the value from the diagonal) plus 2 * 2 * 2 (the cost of subtracting the second row from the remaining rows, with multiplication by the diagonal value) + 2 * 1 (the cost of subtracting the third row from the fourth row, along with multiplication by the value from the diagonal). The total cost for one block is 4 + 3 + 2 + 3 * 3 * 2 + 2 * 2 * 2 + 2 * 1 = 9 + 18 + 8 + 2 = 37. We have 16 such blocks, so the total cost is 37 * 16 = 592.
Now let us see what happens if we mix Lagrange polynomials with B-spline functions. Let us introduce the knot vector [0 0 0 0 1 2 3 4 4 4 5 6 7 8 8 8 9 10 11 12 12 12 13 14 15 16 16 16 16], in which we repeated the knots at the boundaries of groups of four elements. We then obtain the basis functions and the matrix shown in Fig. 4. In this case, the basis has 25 functions, so our matrix has size \( 25\times 25 \), and it has dense diagonal blocks of size 7 by 7, separated by C0 separators. What is the cost of factorization in this case? This cost is equal to the cost of factorizing a single row of a submatrix of size \( 4\times 4 \): namely, 4 (the cost of scaling a row of four entries) plus 3 * 3 * 2 (the cost of subtracting the first row from the others, along with multiplication by the value from the diagonal). We repeat this operation for the 4 submatrices in a block (corresponding to 4 elements), and we repeat the whole thing 4 times (because we have 4 blocks in the matrix). So the total cost is (4 + 3 * 3 * 2 * 4) * 4 = 76 * 4 = 304.
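The structural difference between the three matrices can also be checked programmatically. The following sketch (again an added illustration, not the author's code) derives the sparsity pattern of each mass matrix directly from its knot vector, using the fact established above that an entry is nonzero only when the supports of the two basis functions overlap on an interval.

```python
import numpy as np

def sparsity_from_knots(knots, p):
    # Basis function i is supported on [knots[i], knots[i+p+1]];
    # entry (i, j) of the mass matrix is nonzero only when the
    # supports of functions i and j overlap on an interval.
    n = len(knots) - p - 1
    s = [(knots[i], knots[i + p + 1]) for i in range(n)]
    return np.array([[min(a[1], b[1]) > max(a[0], b[0]) for b in s]
                     for a in s])

p = 3
# the three knot vectors of this chapter: pure B-spline, Lagrange-equivalent
# (every interior knot with multiplicity 3), and hybrid (knots repeated only
# at the boundaries of groups of four elements, i.e., at 4, 8, and 12)
bspline  = [0]*4 + list(range(1, 16)) + [16]*4
lagrange = [0]*4 + [k for k in range(1, 16) for _ in range(3)] + [16]*4
hybrid   = [0]*4 + [k for k in range(1, 16)
                    for _ in range(3 if k % 4 == 0 else 1)] + [16]*4

for name, t in [("B-spline", bspline), ("Lagrange", lagrange), ("hybrid", hybrid)]:
    P = sparsity_from_knots(t, p)
    print(name, P.shape, int(P.sum()))   # matrix size and nonzero count
```

Running it reports sizes \( 19 \times 19 \), \( 49 \times 49 \), and \( 25 \times 25 \) with 121, 241, and 145 nonzero entries, respectively, which quantifies the remark above: the pure B-spline matrix is the smallest but proportionally the densest, the matrix for the basis equivalent to Lagrange polynomials is the largest but the sparsest, and the hybrid lies in between.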
Let us summarize then. The classical finite element method for 16 elements and the basis equivalent to the third-order Lagrange basis requires 592 floating-point operations (multiplications and additions/subtractions) to factorize the matrix. The isogeometric finite element method for 16 elements with the third-order B-spline basis requires 363 floating-point operations. The hybrid method requires 304 floating-point operations. So we can see that even in this simple one-dimensional example, the hybrid method, mixing the cubic B-spline basis functions with the basis equivalent to the Lagrange basis, wins in terms of computational cost both over the pure cubic B-spline basis and over the pure basis equivalent to the Lagrange basis. This difference becomes much larger (up to two orders of magnitude) for two-dimensional and three-dimensional problems.
Such systematic repetition of knots in the knot vector to reduce the computational cost of the finite element method is called refined isogeometric analysis [1]. Please also note that by inserting C0 separators (by reducing the continuity at the interfaces between elements) we enlarge the approximation space, and so we increase the approximation power of our basis functions. Hence, adding these new functions and reducing the continuity actually increases the accuracy of the approximation.

Figure 1: An example of a computational mesh for a one-dimensional projection problem containing sixteen finite elements and element matrices generated on them.
Figure 2: An example of a one-dimensional problem obtained with the knot vector [0 0 0 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 16 16 16], corresponding to third-order B-spline basis functions spread over sixteen elements, as used by the isogeometric finite element method.
Figure 3: An example of a one-dimensional problem obtained with the knot vector [0 0 0 0 1 1 1 2 2 2 3 3 3 4 4 4 5 5 5 6 6 6 7 7 7 8 8 8 9 9 9 10 10 10 11 11 11 12 12 12 13 13 13 14 14 14 15 15 15 16 16 16 16], corresponding to basis functions equivalent to third-order Lagrange polynomials spanning sixteen elements, as used by the classical finite element method.
Figure 4: An example of a one-dimensional problem obtained using the knot vector [0 0 0 0 1 2 3 4 4 4 5 6 7 8 8 8 9 10 11 12 12 12 13 14 15 16 16 16 16], in which the knots at the boundaries of groups of four elements were repeated, corresponding to third-degree B-spline basis functions within the groups of elements and third-degree Lagrange polynomials spanning the boundaries of the groups of elements.
